In computational linguistics, lexical density is an estimated measure of the proportion of lexical units (lexemes) to all units, lexical and functional (grammatical), in a text. It is used in discourse analysis as a descriptive parameter which varies with register and genre. Spoken texts, for example, tend to have a lower lexical density than written ones. Lexical density may be determined thus:

L_d = (N_lex / N) × 100

where:
L_d = the analysed text's lexical density
N_lex = the number of lexical word tokens (nouns, adjectives, verbs, adverbs) in the analysed text
N = the number of all tokens (total number of words) in the analysed text

(The variable symbols applied here are by no means conventional; they were chosen arbitrarily to illustrate the example in question.)
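==Example==
For illustration, the calculation above can be sketched in a few lines of Python. This is a minimal sketch: the tag labels and the hand-tagged sample sentence are hypothetical, and in practice the part-of-speech tags would come from a tagger rather than being supplied by hand.

<syntaxhighlight lang="python">
# Word classes counted as lexical word tokens (N_lex) in the formula above;
# every other tag is treated as a functional (grammatical) word.
LEXICAL = {"noun", "verb", "adjective", "adverb"}

def lexical_density(tagged_tokens: list[tuple[str, str]]) -> float:
    """Return L_d = 100 * N_lex / N for a list of (word, tag) pairs."""
    n = len(tagged_tokens)  # N: total number of tokens
    if n == 0:
        return 0.0
    # N_lex: tokens whose tag names one of the four lexical word classes
    n_lex = sum(1 for _, tag in tagged_tokens if tag in LEXICAL)
    return 100.0 * n_lex / n

# Hand-tagged example sentence: "The cat sat on the mat."
sample = [
    ("The", "determiner"), ("cat", "noun"), ("sat", "verb"),
    ("on", "preposition"), ("the", "determiner"), ("mat", "noun"),
]
print(f"{lexical_density(sample):.1f}")  # 3 lexical tokens of 6 -> 50.0
</syntaxhighlight>

Expressed this way, the choice of tag set is the only real modelling decision: whichever tagger is used, only tokens from the four lexical word classes contribute to N_lex.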
==See also==
*Content analysis
*Peirce's type-token distinction